24 research outputs found

    A color hand gesture database for evaluating and improving algorithms on hand gesture and posture recognition

    With the increase of research activity in vision-based hand posture and gesture recognition, new methods and algorithms are being developed. However, less attention has been paid to developing a standard platform for this purpose. Building a database of hand gesture images is a necessary first step toward standardizing research on hand gesture recognition. For this purpose, we have developed an image database of hand posture and gesture images. The database contains hand images collected with a digital camera under different lighting conditions. Details of the automatic segmentation and clipping of the hands are also discussed in this paper.
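The abstract mentions automatic segmentation and clipping of the hands but does not give the method. A minimal sketch of one common approach (a simple RGB skin-color rule followed by bounding-box clipping); the thresholds and the rule itself are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def segment_hand(rgb, r_min=95, g_min=40, b_min=20):
    """Classify pixels as skin using a simple RGB rule of thumb.

    `rgb` is an (H, W, 3) uint8 array; returns a boolean mask.
    The thresholds are illustrative defaults, not the paper's values.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = np.max(rgb, axis=-1).astype(int) - np.min(rgb, axis=-1).astype(int)
    return ((r > r_min) & (g > g_min) & (b > b_min)
            & (r > g) & (r > b) & (spread > 15))

def clip_to_bbox(rgb, mask):
    """Crop the image to the mask's bounding box (the 'clipping' step)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return rgb  # nothing detected; return the image unchanged
    return rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

In practice a database like this one would be used to tune such thresholds per lighting condition, which is precisely why images under varied lighting are valuable.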

    Indoor emission sources detection by pollutants interaction analysis

    This study employs the correlation-coefficient technique to support emission-source detection in indoor environments. Unlike existing methods, which analyze only primary pollution, we additionally consider secondary pollution (i.e., chemical reactions between pollutants, in addition to pollutant levels), and calculate intra-pollutant correlation coefficients to characterize and distinguish emission events. Extensive experiments show that seven major indoor emission sources are identified by the proposed method: (1) frying canola oil on an electric hob, (2) frying olive oil on an electric hob, (3) frying olive oil on a gas hob, (4) spraying household pesticide, (5) lighting a cigarette and allowing it to smoulder, (6) no activity, and (7) a venting session. Furthermore, our method improves the detection accuracy of a support vector machine compared with applying no data filtering or typical feature-extraction methods such as PCA and LDA. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
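The core idea, turning intra-pollutant correlation coefficients into a feature vector that a classifier such as an SVM can consume, can be sketched as follows. The pollutant names and window layout are illustrative assumptions, not the paper's data format:

```python
import numpy as np

def correlation_features(window):
    """Return the upper-triangular intra-pollutant Pearson correlations
    as a flat feature vector characterizing one emission event.

    `window` is a (samples, pollutants) array, e.g. columns for
    PM2.5, CO2, and VOC readings over one time window (names assumed).
    """
    corr = np.corrcoef(window, rowvar=False)  # pollutant-by-pollutant matrix
    iu = np.triu_indices_from(corr, k=1)      # each pollutant pair once
    return corr[iu]
```

Feature vectors computed this way over labelled windows would then be fed to the SVM in place of (or alongside) raw pollutant levels, which is what lets chemically interacting pollutants distinguish sources with similar absolute concentrations.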

    STRATUS: Towards returning data control to cloud users

    When we upload or create data in the cloud or on the web, we immediately lose control of it. Most of the time, we do not know where the data will be stored or how many copies of our files exist. Worse, we are unable to detect and stop malicious insiders from accessing possibly sensitive data. Despite being transferred across and within clouds over encrypted channels, data often has to be decrypted within the database for it to be processed. Exposing the data to a few privileged users at some point in the cloud is undoubtedly a vendor-centric approach, and hinges on the trust relationships data owners have with their cloud service providers. A recent example of the abuse of that trust is the high-profile Edward Snowden case. In this paper, we propose a user-centric approach that returns control to the data owners, empowering users with data provenance, transparency and auditability, homomorphic encryption, situation awareness, revocation, attribution, and data resilience. We also cover key elements of the concept of user data control. Finally, we introduce how we attempt to address these issues via the New Zealand Ministry of Business, Innovation and Employment (MBIE)-funded STRATUS (Security Technologies Returning Accountability, Trust and User-centric Services in the Cloud) research project.

    Knowledge-based expert systems.


    A hypertext-based intelligent assistant for courseware preparation: a step toward authoring shells

    Existing intelligent tutoring systems are mostly academic prototypes built for research purposes. The main reason for this is that they are time-consuming and expensive to build. Another reason is that there are no tools with which teachers, who are usually not programmers, can build such systems. In this study we propose authoring shells, tools for building intelligent tutoring systems, as a goal for research in this area. As a step toward authoring shells, we have designed and implemented a hypertext-based intelligent assistant (IACP) for developing courseware, meant for use by teachers. To make IACP possible we developed a model for representing domain structure, called the concept-relationship (CR) model, and a methodology for domain-structure elicitation. A CR model is constructed through interviews with a domain and subject-matter expert using the methodology developed in this study. A CR model was constructed, trialled, and its validity demonstrated. The CR model can be used for automatic hypertext linking, which makes it possible to generate hypertexts both for courseware preparation and for self-exploratory learning. We have also introduced the notion of intelligent links, which turn the hypertext into a semi-guided environment more suitable for learning. Student modelling can also be based on the CR model: IACP generates a student-modelling component with every collection of course material it retrieves, structures, and links together. The CR model has the potential to be used as a hypermedia tool: although we developed it for courseware development, it can be used for general hypertext generation, and it addresses the problem of disorientation by providing a layer of links above the document level. Teachers can use IACP to prepare course material, and students can explore the material structured by the system for the purpose of learning. An expert's knowledge of the domain structure is input to IACP by a knowledge engineer.
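The automatic hypertext linking the abstract describes can be pictured as walking a concept graph. The CR model below is a toy stand-in (its concepts, relations, and document names are invented for illustration and are not IACP's actual representation):

```python
# Hypothetical CR model: each concept maps to its related concepts,
# and a separate table maps concepts to the documents that cover them.
cr_model = {
    "recursion": ["stack", "base case"],
    "stack": ["memory"],
    "base case": [],
    "memory": [],
}
docs = {
    "recursion": "doc1.html",
    "stack": "doc2.html",
    "base case": "doc3.html",
    "memory": "doc4.html",
}

def auto_links(concept, cr_model, docs):
    """Generate hypertext links for a concept's page from its CR
    neighbours: a concept-level layer of links above the documents."""
    return {c: docs[c] for c in cr_model.get(concept, []) if c in docs}
```

Because every link is derived from the concept layer rather than hand-placed in each document, relinking after the course material changes is automatic, which is the disorientation-reducing property the abstract claims.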

    Improving Persian-English Statistical Machine Translation: Experiments in Domain Adaptation

    This paper documents recent work on PeEn-SMT, our statistical machine translation system for the English-Persian language pair. We give details of our previous SMT system and present our current development of significantly larger corpora. We explain how recent tests using these larger corpora helped to expose problems in parallel-corpus alignment and corpus content, and how matching the domains of PeEn-SMT's components affects translation output. We then focus on combining corpora and on approaches to improving test data, giving details of the experimental setup together with a number of experiment results and comparisons between them. We show how one combination of corpora gave us a metric score outperforming Google Translate for English-to-Persian translation. Finally, we outline areas of intended future work and how we plan to improve the system's performance to achieve higher metric scores and, ultimately, to provide accurate, reliable language translation.
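The abstract does not name its evaluation metric; SMT comparisons of this kind typically use a BLEU-style score, so here is a toy single-reference sketch of that family of metrics (clipped n-gram precisions with a brevity penalty). It is an assumption about the metric, not the paper's evaluation code:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Toy BLEU-style score: geometric mean of clipped n-gram precisions
    times a brevity penalty. Real evaluations use standard tools instead."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        overlap = sum(min(count, ref[g]) for g, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A score like this, averaged over a held-out test set, is the kind of single number on which one corpus combination can be said to "outperform" another system.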